HCI - Reading Notes

Week 1

MacKenzie, I.S. (2013). Chapter 1: Historical Context. Human-Computer Interaction: An Empirical Research Perspective.

  • predecessor to HCI is human factors or ergonomics
  • concerned with human capabilities, limitations, performance, and designs that fit within these parameters
  • HCI narrows this definition to human interaction with computing technology

Historical Context

  • "As We May Think", Vannevar Bush - memex ("associative thinking"), connects points of interest; i.e. a precursor to hyperlinks/bookmarks.
  • Ivan Sutherland's Sketchpad - manipulation of geometric shapes and lines (objects) on a display using a light pen. Significance: direct manipulation of the interface.
  • Invention of the mouse (1964) - invented by Douglas Engelbart. Enables direct manipulation on screen. Requires an "on-screen tracker" to establish correspondence between device space and display space.
    • Other devices: joystick, light pen, knee-controlled lever, "Grafacon". Evaluation: the mouse was most accurate; the knee-controlled lever was fastest.
  • Xerox Star (1981) - first commercially released computer system with a GUI. It had windows, icons, menus, and a pointing device (WIMP). It supported direct manipulation and what-you-see-is-what-you-get (WYSIWYG) interaction.
    • Breaks from CLIs, which use a sequential programming paradigm.
    • Direct manipulation requires different approach. Uses event-driven programming.
  • Birth of HCI (1983) - Three key events as markers: the first ACM SIGCHI conference, the publication of Card, Moran, and Newell’s The Psychology of Human-Computer Interaction (1983), and the arrival of the Apple Macintosh, pre-announced with flyers in December 1983
    • ACM SIGCHI conference - association of professionals who work in the research and practice of computer-human interaction
    • The Psychology of Human-Computer Interaction, Card, Moran, Newell - key concepts: human perceptual input (e.g., the time to visually perceive a stimulus), cognition (e.g., the time to decide on the appropriate reaction), and motor output (e.g., the time to react and move the hand or cursor to a target).
      • Significance: Theory for designers of interfaces. "Convincingly demonstrates why and how models are important and to teach us how to build them."
      • Context is the milieu of basic research in human-computer interaction and related fields.
      • "Whether generating quantitative predictions across alternative design choices or delimiting a problem space to reveal new relationships, a model’s purpose is to tease out strengths and weaknesses in a hypothetical design and to elicit opportunities to improve the design."
    • Apple Macintosh - like Xerox Star, but catered to masses.

Growth of GUIs

Growth of HCI Research

Early topics:

  • Quality, effectiveness, and efficiency of the interface. How quickly and accurately can people do common tasks using a GUI versus a text-based command-line interface?
  • Menu design - recognition (selecting a command) vs recall (typing), depth vs breadth.

Norman, D. (2013). Chapter 1: The Psychopathology of Everyday Things

Two of the most important characteristics of good design are discoverability and understanding

  1. Discoverability: Is it possible to even figure out what actions are possible and where and how to perform them?
  2. Understanding: What does it all mean? How is the product supposed to be used? What do all the different controls and settings mean?

Fields of design:

  1. Industrial design: The professional service of creating and developing concepts and specifications that optimize the function, value, and appearance of products and systems for the mutual benefit of both user and manufacturer (from the Industrial Design Society of America’s website).
  2. Interaction design: The focus is upon how people interact with technology. The goal is to enhance people’s understanding of what can be done, what is happening, and what has just occurred. Interaction design draws upon principles of psychology, design, art, and emotion to ensure a positive, enjoyable experience.
  3. Experience design: The practice of designing products, processes, services, events, and environments with a focus placed on the quality and enjoyment of the total experience.

"We must design our machines on the assumption that people will make errors."

The role of HCD and Design Specializations

  • Starting with a good understanding of people and the needs that the design is intended to meet.
  • Getting the specification of the thing to be defined is one of the most difficult parts of design - so much so that the HCD principle is to avoid specifying the problem as long as possible and instead iterate on repeated approximations.

Fundamental Principles of Interaction

  • Discoverability results from appropriate application of five fundamental psychological concepts covered in the next few chapters: affordances, signifiers, constraints, mappings, and feedback.
  • But there is a sixth principle: the conceptual model of the system. It is the conceptual model that provides true understanding

Affordance

  • The term affordance refers to the relationship between a physical object and a person.
  • An affordance is a relationship between the properties of an object and the capabilities of the agent that determine just how the object could possibly be used. E.g. chair affords ("is for") support, and therefore, affords sitting.
  • Affordances exist even if they are not visible. For designers, their visibility is critical: visible affordances provide strong clues to the operations of things

Signifiers

  • Signifiers: If an affordance or anti-affordance cannot be perceived, some means of signaling its presence is required
  • Signifier refers to any mark or sound, any perceivable indicator that communicates appropriate behavior to a person.
  • Clarification: A sign is NOT an affordance, it is a signifier.
  • Can be deliberate (a "PUSH" sign) or unintentional (a path newly worn by people walking it).

Affordance vs Signifiers:

  • Affordances are the possible interactions between people and the environment. Some affordances are perceivable, others are not.
  • Perceived affordances often act as signifiers, but they can be ambiguous.
  • Signifiers signal things, in particular what actions are possible and how they should be done. Signifiers must be perceivable, else they fail to function.

Mapping: When the mapping uses spatial correspondence between the layout of the controls and the devices being controlled, it is easy to determine how to use them

Feedback

  • Requirements: immediate, informative, just the right amount.

Conceptual Models

  • A conceptual model is an explanation, usually highly simplified, of how something works.
  • It doesn’t have to be complete or even accurate as long as it is useful.
  • Can be explained to user, or learned by experience.
  • Bad design: when controls suggest a false conceptual model (e.g. a refrigerator with separate freezer and fridge controls, where each control actually affects both compartments).

System Image

Definition: The system image is what can be derived from the physical structure that has been built (including documentation).

Norman, D. A. (1986). Cognitive engineering.

Goals:

  1. To understand the fundamental principles behind human action and performance that are relevant for the development of engineering principles of design.
  2. To devise systems that are pleasant to use - the goal is neither efficiency nor ease nor power, although these are all to be desired, but rather systems that are pleasant, even fun

Psychological Variables Differ From Physical Variables: In many situations, the variables easily controlled are not those that the user cares about.

  1. Mapping problems: Which control controls what?
  2. Ease of control
  3. Evaluation - determine if correct outcome has been reached.

Gulf of Execution / Evaluation

Week 2

MacKenzie, I.S. (2013). Chapter 4: Scientific Foundations. Human-Computer Interaction: An Empirical Research Perspective. (pp. 121-152). Waltham, MA: Elsevier.

What is research?

3 definitions:

  1. Careful or diligent search - "Search" is key term, trying to find things.
  2. Collecting information about a particular subject - data gathering of a phenomenon.
  3. Research is investigation or experimentation aimed at the discovery and interpretation of facts and revision of accepted theories or laws in light of new facts.
    • In HCI, Experiment = user study
    • Empirical research - encompasses both experimental and non-experimental methods (e.g. building interaction models)
    • Facts - what we seek in experimental research.
    • Theory - hypothesis of a phenomenon
    • Law - More constraining, accepted. e.g. Fitts' law of human motor behavior in HCI domain.
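Fitts' law is the standard example of a law in HCI: it predicts movement time from target distance and width. A minimal sketch of its Shannon formulation (the constants a and b here are illustrative placeholders; in practice they are fitted by regression on measured data):

```python
import math

def fitts_mt(distance: float, width: float, a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time (seconds) via Fitts' law, Shannon formulation:
    MT = a + b * log2(D/W + 1). The log term is the index of difficulty (bits).
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Farther or smaller targets have a higher index of difficulty,
# hence a longer predicted movement time.
```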

Additional characteristics of research:

  1. Research must be published - why? It must extend, refine, or revise the existing body of knowledge in the field.
  2. Citations, references, impact - connect ideas to other ideas, support intellectual honesty, back up assertions.
    • Number of citations to a paper = impact (e.g. the H-index quantifies both research productivity and the overall impact of a body of work)
  3. Reproducibility - Research that cannot be replicated is useless

Research versus engineering versus design

  • Engineers and designers are in the business of building things
    • Trade-off: form (design emphasis) and function (engineering emphasis)
  • Research: Narrow focus, small ideas conceived, prototyped, tested, advanced or discarded.
    • research prototype = mockups, not actual products.
    • "Prototypes should command only as much time, effort, and investment as are needed to generate useful feedback and evolve an idea."
    • "Researchers provide the raw materials and processes engineers and designers work with"

What is empirical research?

Definitions:

  1. Originating in or based on observation or experience.
  2. Relying on experience or observation alone, often without due regard for system and theory
  3. Capable of being verified or disproved by observation or experiment

Research methods

Observational

  • What: interviews, field investigations, contextual inquiries, case studies, field studies, focus groups
  • More qualitative
  • Achieves relevance while sacrificing precision - Real world phenomena are high in relevance, but lack the precision available in controlled laboratory experiments.
  • Focus on why and how

Experimental

  • What: Controlled experiments. Include manipulated variable and response variable (independent vs dependent)
  • Comparison of manipulated variables is key, otherwise not experimental research.
  • The relationship between the independent variable and the dependent variable is one of cause and effect

Correlational

  • What: Look for relationships between variables (e.g. age, income, gender)
  • How: Observation, interviews, surveys, etc.
  • Correlational methods provide a balance between relevance and precision

Observe and measure

How are observations made:

  1. Another human as observer - manual entry
  2. An apparatus is the observer - automatic logs by a computer

Measurement scales:

  1. Nominal - arbitrarily assigning a code to an attribute or a category (license plate numbers, zip codes)
    • Used to count frequency
  2. Ordinal - provide an order or ranking to an attribute
    • Implies ranking
    • Comparison of greater than or less than are possible.
    • Not valid to compute the mean of ordinal data.
  3. Interval - equal distances between adjacent values, but no absolute zero (e.g. temperature Fahrenheit or Celsius)
    • used in questionnaires where a response on a linear scale is solicited (e.g. Likert scale)
  4. Ratio - Ratio data have an absolute zero and support a myriad of calculations to summarize, compare, and test the data.
    • Mathematical operations and statistics are possible (add/subtract/mean/stdev)
    • Examples: time, occurrence counts
    • Normalization standardizes measurements and makes comparison easier (e.g. words-per-minute, error rate)
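Normalized ratio measures are simple to compute; a small sketch (the one-word-equals-five-characters convention is the standard one in text-entry research):

```python
def words_per_minute(chars_transcribed: int, seconds: float) -> float:
    """Entry speed, using the convention that one word = 5 characters."""
    return (chars_transcribed / 5) / (seconds / 60)

def error_rate(incorrect_chars: int, total_chars: int) -> float:
    """Proportion of characters entered incorrectly."""
    return incorrect_chars / total_chars

# e.g. 250 characters in 60 seconds is 50 WPM, regardless of trial length,
# which is what makes the measure comparable across participants.
```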

Research questions

What: Conduct experimental research to answer (and raise) questions about a new or existing user interface or interaction technique.

Difficulty: People exhibit variable behavior, which affects confidence in our findings.

Questions:

  • Is the new technique any good?
  • Is the new technique better than (interface)?
  • Is the new technique faster than (interface)?
  • Is the new technique faster than (interface) after a bit of practice?
  • Is the measured entry speed (in words per minute) higher for the new technique than for a (interface) after one hour of use?

Internal validity and external validity

Definition: Accuracy of answer (internal) vs breadth of question (external)

  1. Internal validity (definition) is the extent to which an effect observed is due to the test conditions.
    • Why? We want confidence that the difference observed was actually due to inherent differences between the techniques.
  2. External validity (definition) is the extent to which experimental results are generalizable to other people and other situations.
    • Why? To the extent the research pursues broadly framed questions, the results tend to be broadly applicable.

Tradeoffs:

  • Effort to improve external validity through environmental considerations may negatively impact internal validity.
  • The desire to improve external validity through procedural considerations may negatively impact internal validity.

Ecological validity vs external validity:

  • Ecological = Methodology (using materials, tasks, and situations typical of the real world)
  • External = Outcome (obtaining results that generalize to a broad range of people and situations).

Comparative evaluations

Takeaway: "A comparative evaluation yields more valuable and insightful results than a single-interface evaluation"

Relationships: circumstantial and causal

Causal relationship: "condition manipulated in the experiment caused the changes in the human responses that were observed and measured"

  • Different from circumstantial (e.g. cigarettes and cancer)
  • Examined by controlled experiments, where only one variable is changed.
  • Caveat: If the variable manipulated is a naturally occurring attribute of participants, then cause and effect conclusions are unreliable.
    • e.g. gender (female, male), personality (extrovert, introvert), handedness (left, right)

Research topics

Finding a topic:

  1. Think small - Narrow down the problem to sub-problems.
  2. Replicate - Replicate an existing experiment from literature. This is an empowering process.
  3. Know the literature
  4. Think inside the box - Just get on with your day, but at every juncture, every interaction, think and question. What happened? Why did it happen? Is there an alternative?

Müller, H., Sedley, A., & Ferrall-Nunge, E. (2014). Survey research in HCI. In J. Olson & W. Kellogg (Eds.), Ways of Knowing in HCI (pp. 229-266). New York: Springer.

What Questions the Method Can Answer

  1. Measure attitudes
  2. Measure intent
  3. Quantify task success
  4. UX feedback
  5. User characteristics - understand a system's users
  6. Interactions with technology - how users interact with technology in broad terms (social, demographic)
  7. Awareness - helps understand people's awareness of existing technologies
  8. Comparisons - compare users' attitudes / perceptions / experiences across segments, time, geographies, etc.

When to avoid surveys

  1. Precise behaviors - gather from log data instead
  2. Underlying motivations - users often don't know their own motivations. Use ethnography or contextual inquiry instead.
  3. Usability evaluations - why users succeeded / failed in a task. Use interviews instead

How to Survey

Research goals and constructs

  • Do the survey constructs focus on results which will directly address research goals and inform stakeholders’ decision making rather than providing merely informative data?
  • Will the results be used for longitudinal comparisons or for one-time decisions?
  • What is the number of responses needed to provide the appropriate level of precision for the insights needed?

Population and sampling

  • Random sampling is best, minimizes sampling bias. e.g. random phone number, address-based surveys.
  • Non-probability sampling - snowball recruiting, convenience samples (target people who are easily available). Higher potential for bias.
  • Choosing sample size - determine margin of error. Commonly used are 3-5%. Confidence level indicates how likely the reported metric falls within the margin of error. Typically 95%.
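The margin-of-error numbers above map to a required sample size via the standard formula for a proportion estimate; a sketch (assumes simple random sampling from a large population, with p = 0.5 as the worst case):

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum sample size for a proportion: n = z^2 * p * (1 - p) / e^2.

    z = 1.96 corresponds to the typical 95% confidence level;
    p = 0.5 maximizes p * (1 - p), giving the conservative estimate.
    """
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# 95% confidence with a 5% margin of error requires 385 respondents;
# tightening to 3% pushes this past 1,000.
```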

Questionnaire design and biases

Common biases: Satisficing - respondents use a suboptimal amount of effort to answer.

- Respondents are more likely to satisfice when (Krosnick, 1991):
    - Cognitive ability to answer is low.
    - Motivation to answer is low.
    - Question difficulty is high at one of the four stages of answering (comprehension, retrieval, judgment, reporting), causing cognitive exertion.
- Avoid by:
    - Keeping answers concise
    - Avoid using same rating scale in series
    - Avoid long surveys
    - Explain importance of survey
    - Use trap questions (e.g. "enter 5 in the following box") to identify satisficers

Acquiescence Bias - respondents tend to agree in order to please the surveyor.

- Avoid by:
    1. Avoiding agree/disagree, yes/no, true/false answer formats
    2. Asking questions about the underlying construct directly, rather than asking for agreement with a statement
    3. Using reverse-keyed constructs (asking about the same construct both positively and negatively)

Social Desirability - respondents answer questions in a manner they feel will be positively perceived by others

- Avoid by allowing anonymous answers.

Response Order Bias - the tendency to select items toward the beginning or end of an answer list or scale.

Question Order Bias - Each question in a survey has the potential to bias each subsequent question by priming respondents

Review and survey pretesting

Cognitive Pretesting - take the survey while using the think-aloud protocol (similar to a usability study).

Field Testing - Piloting the survey with a small subset of the sample

Implementation and launch

Monitoring Survey Paradata

  • Click-through rate: Of those invited, how many opened the survey.
  • Completion rate: Of those who opened the survey, how many finished the survey.
  • Response rate: Of those invited, how many finished the survey.
  • Break-off rate: Of those who started, how many dropped off on each page.
  • Completion time: The time it took respondents to finish the entire survey.
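The first three rates are ratios over the invitation funnel; a quick sketch (function and field names are illustrative):

```python
def survey_paradata(invited: int, opened: int, finished: int) -> dict:
    """Basic paradata rates from invitation/open/finish counts."""
    return {
        "click_through_rate": opened / invited,  # invited -> opened
        "completion_rate": finished / opened,    # opened -> finished
        "response_rate": finished / invited,     # invited -> finished
    }

# Note that response rate = click-through rate * completion rate,
# so a low response rate can be diagnosed at either stage.
```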

Maximizing response rates: "Total Design Method":

  1. Week 1: Initial request with survey
  2. Week 2: Reminder postcard
  3. Week 4: Replacement survey to non-respondents
  4. Week 7: Second replacement survey to non-respondents.

One strategy to maximize the benefit of incentives is to offer a small non-contingent award to all invitees, followed by a larger contingent award to initial non-respondents (Lavrakas, 2011).

Data analysis and reporting

Cleaning:

  1. Dedupe
  2. Remove "speeders"
  3. Remove "straight liners"
  4. Fix missing data

Assessment:

  1. Low inter-item reliability - inconsistent or unreliable responses may signify that a respondent was not paying attention to the questions.
  2. Outliers - responses 2 to 3 standard deviations from the mean.
  3. Inadequate open-ended responses - gibberish or off-topic answers often signal a low-quality response overall.

Hypothesis testing - assesses whether observed differences between groups are statistically significant (using t-tests, ANOVA, chi-square).
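For a two-group comparison, the t statistic is the difference in group means scaled by its standard error; a minimal stdlib-only sketch of Welch's t (in practice a stats package would also return the p-value from the t distribution):

```python
from statistics import mean, variance

def welch_t(sample_a: list, sample_b: list) -> float:
    """Welch's t statistic for comparing two group means
    (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    standard_error = (variance(sample_a) / na + variance(sample_b) / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / standard_error
```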

Inferential statistics can also be applied to identify connections among variables:

  1. Bivariate correlations are widely used to assess linear relationships between variables.
  2. Linear regression - the proportion of variance in a continuous dependent variable explained by predictors.
  3. Logistic regression - predicts the change in probability of a particular value of a binary variable.
  4. Decision trees - probabilities of reaching specific outcomes
  5. Factor analysis - identify groups of covariates, reduce large number of variables into smaller set.
  6. Cluster analysis - categorizing segments.
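A bivariate (Pearson) correlation is the standardized covariance of two variables; a self-contained sketch:

```python
from statistics import mean

def pearson_r(xs: list, ys: list) -> float:
    """Pearson correlation coefficient between two equal-length samples:
    covariance divided by the product of the standard deviations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# r ranges from -1 (perfect negative linear relationship)
# to +1 (perfect positive linear relationship).
```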

Analysing Open-ended Responses:

  1. Coding - transform qualitative data into quantitative data
  2. Interrater reliability - e.g. Cohen's kappa
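Cohen's kappa corrects raw interrater agreement for the agreement expected by chance; a minimal pure-Python sketch (function name illustrative):

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters coding the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement from the raters' marginal label distributions.
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# kappa = 1 means perfect agreement; 0 means agreement no better than chance.
```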

Week 3

Week 4

Week 5

Week 6

Week 7

Week 8

Week 9

Week 10